Advancing Beyond Generic Prompts
AI011 · Lesson 7

Optimization Through Fine-Tuning and Specialized Architectures

1. Beyond Prompting Alone

While few-shot prompting is a powerful starting point, scaling an AI solution often requires moving to supervised fine-tuning. This process bakes specific knowledge or behavior directly into the model's weights.

Decision guideline: fine-tune only when the gains in response quality and the reduction in token costs outweigh the substantial compute and data-preparation effort required.

$\text{Cost} = \text{Tokens} \times \text{Price per Token}$
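As a toy illustration of the formula, the helper below computes a completion's cost; the per-token price used in the example is hypothetical, not any provider's real rate.

```python
def completion_cost(num_tokens: int, price_per_token: float) -> float:
    """Cost = number of tokens x price per token."""
    return num_tokens * price_per_token

# Hypothetical rate of $0.000002 per token (illustrative only):
# 1,500 tokens cost $0.003.
print(completion_cost(1_500, 0.000002))
```

Comparing this figure before and after fine-tuning (a fine-tuned model often needs far shorter prompts) is one concrete way to apply the decision guideline above.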

2. The Small Language Model (SLM) Revolution

Small language models (SLMs) are highly efficient, streamlined counterparts to their larger siblings (e.g., Phi-3.5, Mistral Small). They are trained on carefully curated, high-quality data.

Trade-off: SLMs dramatically reduce latency and enable edge deployment (running locally on-device), but they give up the broad, general-purpose "human-like" intelligence of large-scale models.

3. Specialized Architectures

  • Mixture of Experts (MoE): a technique for scaling a model's total parameter count while keeping inference compute efficient. For any given input token, only a subset of "experts" is activated (e.g., Phi-3.5-MoE).
  • Multimodality: architectures designed to process text, images, and even audio together, extending applications beyond text generation (e.g., Llama 3.2).
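To make the MoE idea concrete, here is a minimal pure-Python sketch of top-k routing. The "experts" are arbitrary toy functions and the gate scores are invented for illustration; a real MoE layer learns both.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_scores, k=2):
    """Route input x to the top-k experts and mix their outputs by gate weight."""
    probs = softmax(gate_scores)
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top_k)  # renormalize over selected experts
    # Only the k selected experts actually run -- this is the compute saving.
    return sum(probs[i] / norm * experts[i](x) for i in top_k)

# Four toy "experts" (arbitrary functions) with made-up gate scores.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x, lambda x: x * x]
result = moe_forward(3.0, experts, gate_scores=[2.0, 1.0, 0.1, 0.1], k=2)
print(result)
```

With `k=2`, only experts 0 and 1 are evaluated here; experts 2 and 3 contribute nothing and cost nothing, even though they count toward the model's total size.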
The Efficiency Hierarchy
Always start with prompt engineering. If that falls short, move to RAG (retrieval-augmented generation). Reserve fine-tuning as a last-stage, advanced optimization.
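The hierarchy can be sketched as a simple decision helper. The two boolean criteria are simplifications invented for illustration; real decisions weigh quality, cost, and data availability together.

```python
def choose_technique(prompting_sufficient: bool, needs_external_knowledge: bool) -> str:
    """Follow the efficiency hierarchy: prompting -> RAG -> fine-tuning."""
    if prompting_sufficient:
        return "prompt engineering"      # cheapest option, always try first
    if needs_external_knowledge:
        return "RAG"                     # ground the model in retrieved documents
    return "fine-tuning"                 # last resort: bake behavior into weights

print(choose_technique(prompting_sufficient=False, needs_external_knowledge=True))
```

Note the ordering: fine-tuning is only reached after both cheaper options are ruled out, mirroring the guidance above.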
Question 1
When does the course recommend proceeding with fine-tuning over prompt engineering?
  • When the benefits in quality and cost (reduced token usage) outweigh the compute effort.
  • Whenever you need the model to sound more human-like.
  • As the very first step before trying RAG or prompt engineering.
  • Only when deploying to an edge device.
Question 2
Which model architecture allows scaling model size while maintaining computational efficiency?
  • Supervised Fine-Tuning (SFT)
  • Retrieval-Augmented Generation (RAG)
  • Mixture of Experts (MoE)
  • Multimodality
Challenge: Edge Deployment Strategy
Apply your knowledge to a real-world scenario.
You need to deploy a multilingual translation tool that runs locally on a laptop with limited GPU resources.
Task 1
Select the appropriate model family and tokenizer for this multilingual, low-resource task.
Solution:
Mistral NeMo with the Tekken Tokenizer. It is optimized for multilingual text and fits within SLM constraints.
Task 2
Define the deployment framework for high-performance local inference.
Solution:
Use ONNX Runtime or Ollama for local execution to maximize hardware acceleration on the laptop.
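As a sketch of the Ollama route, the snippet below builds a request for Ollama's local REST endpoint (`http://localhost:11434/api/generate`). The model name and prompt are placeholders, and actually sending the request requires a running Ollama server with the model pulled.

```python
import json
import urllib.request

def build_ollama_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a generate request for a local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder model name; on a real setup, run `ollama pull <model>` first.
req = build_ollama_request("mistral-nemo", "Translate to French: Hello, world.")
print(req.full_url)

# To execute against a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because inference happens entirely on localhost, no tokens leave the laptop, which fits the edge-deployment constraint of the challenge.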